Concept Alignment as a Prerequisite for Value Alignment
Value alignment is essential for building AI systems that can safely and
reliably interact with people. However, what a person values -- and is even
capable of valuing -- depends on the concepts that they are currently using to
understand and evaluate what happens in the world. The dependence of values on
concepts means that concept alignment is a prerequisite for value alignment --
agents need to align their representation of a situation with that of humans in
order to successfully align their values. Here, we formally analyze the concept
alignment problem in the inverse reinforcement learning setting, show how
neglecting concept alignment can lead to systematic value misalignment, and
describe an approach that helps minimize such failure modes by jointly
reasoning about a person's concepts and values. Additionally, we report
experimental results with human participants showing that humans reason about
the concepts used by an agent when acting intentionally, in line with our joint
reasoning model.
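The joint reasoning idea can be illustrated with a minimal toy sketch (this is not the paper's actual model; the concepts, rewards, and observations below are all hypothetical): an observer maintains a posterior over which concept a person is using, updating it from the person's Boltzmann-rational choices.

```python
# Toy sketch of jointly reasoning about a person's concepts and values.
# All concepts, rewards, and observations here are hypothetical.
import math

# Two hypothetical "concepts": ways of categorizing the same three objects.
concepts = {
    "color": {"obj1": "red", "obj2": "red", "obj3": "blue"},
    "shape": {"obj1": "round", "obj2": "square", "obj3": "round"},
}
# Hypothetical values: a reward for each category, under each concept.
rewards = {
    "color": {"red": 1.0, "blue": 0.0},
    "shape": {"round": 1.0, "square": 0.0},
}

def choice_likelihood(concept, chosen, options, beta=3.0):
    """Softmax (Boltzmann-rational) probability of choosing `chosen`."""
    r, cat = rewards[concept], concepts[concept]
    z = sum(math.exp(beta * r[cat[o]]) for o in options)
    return math.exp(beta * r[cat[chosen]]) / z

def posterior_over_concepts(observations, prior=0.5):
    """Bayesian update over which concept the person is using."""
    post = {"color": prior, "shape": 1 - prior}
    for chosen, options in observations:
        for c in post:
            post[c] *= choice_likelihood(c, chosen, options)
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# The person picks obj2 (red but square): evidence that they are evaluating
# the world through the color concept, not the shape concept.
post = posterior_over_concepts([("obj2", ["obj1", "obj2", "obj3"])])
print(post)
```

An observer that fixed the wrong concept in advance would misread the same choice as a value difference; inferring concept and value jointly avoids that failure mode.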
Correspondences between word learning in children and captioning models
For human children as well as machine learning systems, a key challenge in
learning a word is linking the word to the visual phenomena it describes. By
organizing model output into word categories used to analyze child language
learning data, we show a correspondence between word learning in children and
the performance of image captioning models. Although captioning models are
trained only on standard machine learning data, we find that their performance
in producing words from a variety of word categories correlates with the age at
which children acquire words from each of those categories. To explain why this
correspondence exists, we show that the performance of captioning models is
correlated with human judgments of the concreteness of words, suggesting that
these models are capturing the complex real-world association between words and
visual phenomena.
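The correlational analysis can be sketched as follows (the per-category numbers below are made up for illustration and are not the paper's data): rank-correlate a captioning model's production rate for each word category with the average age at which children acquire words in that category.

```python
# Toy illustration: Spearman rank correlation between a captioning model's
# per-category word-production rate and children's acquisition age.
def ranks(xs):
    """Rank values 1..n (assumes no ties, sufficient for this toy example)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-category numbers (not real data).
categories = ["animals", "food", "vehicles", "actions", "function words"]
model_production_rate = [0.82, 0.75, 0.60, 0.35, 0.10]  # fraction produced
child_acquisition_age = [18, 19, 22, 26, 34]            # months

rho = spearman(model_production_rate, child_acquisition_age)
print(f"Spearman rho = {rho:.2f}")
```

A strongly negative correlation in such an analysis would mean categories the model produces well are the ones children acquire earliest, which is the qualitative pattern the abstract reports.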
Getting aligned on representational alignment
Biological and artificial information processing systems form representations
that they can use to categorize, reason, plan, navigate, and make decisions.
How can we measure the extent to which the representations formed by these
diverse systems agree? Do similarities in representations then translate into
similar behavior? How can a system's representations be modified to better
match those of another system? These questions pertaining to the study of
representational alignment are at the heart of some of the most active research
areas in cognitive science, neuroscience, and machine learning. For example,
cognitive scientists measure the representational alignment of multiple
individuals to identify shared cognitive priors, neuroscientists align fMRI
responses from multiple individuals into a shared representational space for
group-level analyses, and ML researchers distill knowledge from teacher models
into student models by increasing their alignment. Unfortunately, there is
limited knowledge transfer between research communities interested in
representational alignment, so progress in one field often ends up being
rediscovered independently in another. To improve communication between these fields, we
propose a unifying framework that can serve as a common language between
researchers studying representational alignment. We survey the literature from
all three fields and demonstrate how prior work fits into this framework.
Finally, we lay out open problems in representational alignment where progress
can benefit all three of these fields. We hope that our work can catalyze
cross-disciplinary collaboration and accelerate progress for all communities
studying and developing information processing systems. We note that this is a
working paper and encourage readers to reach out with their suggestions for
future revisions.
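One widely used alignment measure from this literature, representational similarity analysis (RSA), can be sketched in a few lines: compare two systems by correlating their pairwise stimulus-distance patterns, which works even when the systems' representational spaces have different dimensionalities. The embeddings below are hypothetical toy vectors, not data from any of the surveyed studies.

```python
# Minimal RSA sketch: correlate two systems' pairwise-distance patterns.
# Toy embeddings only; both "systems" represent the same four stimuli.
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rdm_upper(embeddings):
    """Upper triangle of the representational dissimilarity matrix."""
    n = len(embeddings)
    return [euclid(embeddings[i], embeddings[j])
            for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

# A 2-D system and a 3-D system with similar relational structure.
system_a = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
system_b = [[0.1, 0.2, 0.0], [2.1, 0.1, 0.0],
            [0.0, 2.2, 0.1], [2.0, 2.1, 0.2]]

alignment = pearson(rdm_upper(system_a), rdm_upper(system_b))
print(f"RSA alignment = {alignment:.2f}")
```

Because RSA compares distance patterns rather than raw coordinates, it answers the abstract's first question (do diverse systems' representations agree?) without requiring the systems to share an embedding space.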